I am a transdisciplinary optimist dedicated to exploring, quantifying, understanding, and manipulating light through a synergistic combination of hardware and software. My research focuses on computational optical imaging and display across multiple domains: 3D (complex wave fields or volumetric scenes) and 4D (space–time).

I pursue this by embedding computation directly into acquisition and processing pipelines—whether through novel hardware and system designs, advanced algorithms, or their integration. I develop generalizable tools that address fundamental challenges in optical imaging through a progressive four-stage approach:

  • Theory: rigorous formulations of the underlying physical phenomena.
  • Numerical modeling: building numerical forward models of the physical system and exploring the linkages among its key components.
  • Inverse computation: inverting those forward models to recover the internal state of the object (see the sketch after this list).
  • Applications: applying the full workflow above to practical, cross-disciplinary scenarios while accommodating domain-specific constraints.
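
To make the modeling and inversion stages concrete, here is a minimal, hypothetical sketch in JAX (my own toy example, not a description of any particular system): a differentiable forward model simulates a blurred, noisy measurement of a 1-D object, and gradient descent through that model recovers an estimate of the object. The names forward_model and loss, as well as the chosen kernel, noise level, and step size, are all illustrative assumptions.

```python
# Toy sketch (illustrative only): a differentiable forward model and its
# gradient-based inversion for a 1-D deconvolution problem.
import jax
import jax.numpy as jnp

def forward_model(obj, kernel):
    """Simulate a measurement: the object blurred by a convolution kernel."""
    return jnp.convolve(obj, kernel, mode="same")

def loss(est, measurement, kernel):
    """Data-fidelity term between simulated and observed measurements."""
    return jnp.mean((forward_model(est, kernel) - measurement) ** 2)

# Synthetic ground truth, Gaussian blur kernel, and noisy measurement.
key = jax.random.PRNGKey(0)
true_obj = jnp.zeros(64).at[20:28].set(1.0)
kernel = jnp.exp(-0.5 * (jnp.arange(-4.0, 5.0) / 1.5) ** 2)
kernel = kernel / kernel.sum()
measurement = forward_model(true_obj, kernel)
measurement = measurement + 0.01 * jax.random.normal(key, measurement.shape)

# Inverse computation: gradient descent through the differentiable model.
grad_fn = jax.jit(jax.grad(loss))          # gradient w.r.t. the estimate
estimate = jnp.zeros_like(true_obj)
for _ in range(500):
    estimate = estimate - 20.0 * grad_fn(estimate, measurement, kernel)

print("MSE vs. ground truth:", float(jnp.mean((estimate - true_obj) ** 2)))
```

In practice the forward model would encode the actual optics (propagation, sampling, noise statistics), but the pattern of differentiating through it stays the same.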

My overarching objective is to bridge the gaps among the core components of computational imaging systems, thereby overcoming the limitations of fragmented co-design. This endeavor encompasses:

  • Differentiable Imaging: Developing uncertainty-aware, co-designed optical imaging systems (see the sketch after this list).
  • Advancing imaging modalities: Extending holography, light field imaging, coherent diffraction imaging, ptychography, microscopy, and related modalities for robust 3D and 4D imaging; exploring the physical and computational boundaries of imaging systems and pushing beyond them.
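
As a hedged illustration of the co-design idea (a minimal sketch under assumed, hypothetical names such as measure and reconstruct, not a published pipeline), the JAX snippet below jointly optimizes a coded-mask optical element and the regularization parameter of a simple decoder, end to end, with measurement noise injected during training as a simple stand-in for uncertainty awareness.

```python
# Hedged sketch of differentiable co-design: a hypothetical coded mask and a
# reconstruction regularizer are optimized jointly through one differentiable
# measure-then-reconstruct pipeline, with noise injected during training.
import jax
import jax.numpy as jnp

def measure(objs, mask_logits):
    """Hypothetical optical encoding: element-wise coded-mask modulation."""
    return objs * jax.nn.sigmoid(mask_logits)   # transmittance kept in (0, 1)

def reconstruct(meas, mask_logits, reg):
    """Simple regularized (Wiener-like) per-element decoder."""
    t = jax.nn.sigmoid(mask_logits)
    return meas * t / (t ** 2 + jax.nn.softplus(reg))

def end_to_end_loss(params, objs, key):
    mask_logits, reg = params
    noise = 0.05 * jax.random.normal(key, objs.shape)   # measurement noise
    recon = reconstruct(measure(objs, mask_logits) + noise, mask_logits, reg)
    return jnp.mean((recon - objs) ** 2)

key = jax.random.PRNGKey(1)
objs = jax.random.uniform(key, (16, 64))       # a toy batch of "scenes"
params = (jnp.zeros(64), jnp.array(0.0))       # mask logits, regularizer

grad_fn = jax.jit(jax.grad(end_to_end_loss))
for _ in range(300):
    key, sub = jax.random.split(key)
    grads = grad_fn(params, objs, sub)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
```

The point of the sketch is the structure, not the specific optics: because measurement and reconstruction sit in one differentiable graph, gradients flow into the hardware parameters as readily as into the algorithmic ones.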

By weaving together theory, modeling, computation, and application, I aim to uncover new possibilities in how we capture, process, and visualize the wealth of light-based information, ultimately enhancing our capacity to perceive the universe around us.